Search results for "Gibbs sampling"
Showing 10 of 18 documents
Multi-label Classification Using Stacked Hierarchical Dirichlet Processes with Reduced Sampling Complexity
2018
Nonparametric topic models based on hierarchical Dirichlet processes (HDPs) allow for the number of topics to be automatically discovered from the data. The computational complexity of standard Gibbs sampling techniques for model training is linear in the number of topics. Recently, it was reduced to be linear in the number of topics per word using a technique called alias sampling combined with Metropolis Hastings (MH) sampling. We propose a different proposal distribution for the MH step based on the observation that distributions on the upper hierarchy level change slower than the document-specific distributions at the lower level. This reduces the sampling complexity, making it linear i…
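A minimal sketch of the Metropolis-Hastings correction that makes such stale, alias-style proposals exact; the arrays p_true and p_stale are hypothetical stand-ins for the HDP conditionals, not the paper's data structures:

```python
import numpy as np

def mh_step_with_stale_proposal(current_topic, p_true, p_stale, rng):
    """One Metropolis-Hastings step whose proposal comes from a stale,
    cached topic distribution (as an alias table would provide in O(1))
    and is corrected against the current conditional p_true.
    Both arrays are unnormalized and assumed strictly positive."""
    proposal = rng.choice(len(p_stale), p=p_stale / p_stale.sum())
    # Independence-proposal acceptance ratio: normalizers cancel, so the
    # staleness of p_stale is corrected exactly.
    ratio = (p_true[proposal] * p_stale[current_topic]) / (
        p_true[current_topic] * p_stale[proposal])
    return proposal if rng.random() < min(1.0, ratio) else current_topic
```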
Statistical inference and Monte Carlo algorithms
1996
This review article looks at a small part of the picture of the interrelationship between statistical theory and computational algorithms, especially the Gibbs sampler and the Accept-Reject algorithm. We pay particular attention to how the methodologies affect and complement each other.
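For concreteness, a minimal Accept-Reject sampler of the kind the review discusses, with illustrative names and a Beta(2,2) example target (not taken from the article):

```python
import numpy as np

def accept_reject(f, g_sample, g_pdf, M, n, rng=np.random.default_rng()):
    """Draw n samples from density f using proposals from g,
    assuming f(x) <= M * g_pdf(x) for all x."""
    samples = []
    while len(samples) < n:
        x = g_sample(rng)
        # Accept x with probability f(x) / (M * g(x)).
        if rng.random() <= f(x) / (M * g_pdf(x)):
            samples.append(x)
    return np.array(samples)

# Example: sample Beta(2,2), whose density 6x(1-x) is bounded by 1.5,
# using uniform proposals on [0, 1].
beta_pdf = lambda x: 6.0 * x * (1.0 - x)
draws = accept_reject(beta_pdf, lambda rng: rng.random(),
                      lambda x: 1.0, M=1.5, n=1000)
```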
Bayesian Smoothing in the Estimation of the Pair Potential Function of Gibbs Point Processes
1999
A flexible Bayesian method is suggested for the pair potential estimation with a high-dimensional parameter space. The method is based on a Bayesian smoothing technique, commonly applied in statistical image analysis. For the calculation of the posterior mode estimator a new Monte Carlo algorithm is developed. The method is illustrated through examples with both real and simulated data, and its extension into truly nonparametric pair potential estimation is discussed.
Online Sparse Collapsed Hybrid Variational-Gibbs Algorithm for Hierarchical Dirichlet Process Topic Models
2017
Topic models for text analysis are most commonly trained using either Gibbs sampling or variational Bayes. Recently, hybrid variational-Gibbs algorithms have been found to combine the best of both worlds. Variational algorithms are fast to converge and more efficient for inference on new documents. Gibbs sampling enables sparse updates since each token is only associated with one topic instead of a distribution over all topics. Additionally, Gibbs sampling is unbiased. Although Gibbs sampling takes longer to converge, it is guaranteed to arrive at the true posterior after infinitely many iterations. By combining the two methods it is possible to reduce the bias of variational methods while …
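The sparse-update property is easiest to see in a collapsed Gibbs step for a vanilla LDA-style model, where resampling one token touches only a handful of counts; a sketch with hypothetical count arrays, not the paper's HDP machinery:

```python
import numpy as np

def gibbs_token_update(d, w, z, n_dk, n_kw, n_k, alpha, beta, V, rng):
    """Resample the topic of one token (document d, word w, current
    topic z). Only O(1) counts change per update: this is the sparsity
    Gibbs sampling offers over dense variational updates."""
    # Remove the token's current assignment from the counts.
    n_dk[d, z] -= 1; n_kw[z, w] -= 1; n_k[z] -= 1
    # Full conditional over topics for this token.
    p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
    z_new = rng.choice(len(p), p=p / p.sum())
    # Add the token back under its new topic.
    n_dk[d, z_new] += 1; n_kw[z_new, w] += 1; n_k[z_new] += 1
    return z_new
```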
Grapham: Graphical models with adaptive random walk Metropolis algorithms
2008
Recently developed adaptive Markov chain Monte Carlo (MCMC) methods have been applied successfully to many problems in Bayesian statistics. Grapham is a new open source implementation covering several such methods, with emphasis on graphical models for directed acyclic graphs. The implemented algorithms include the seminal Adaptive Metropolis algorithm adjusting the proposal covariance according to the history of the chain and a Metropolis algorithm adjusting the proposal scale based on the observed acceptance probability. Different variants of the algorithms allow one, for example, to use these two algorithms together, employ delayed rejection and adjust several parameters of the algorithm…
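A stripped-down sketch of the seminal Adaptive Metropolis scheme mentioned above, with the proposal covariance estimated from the chain's history and the classical 2.38²/d scaling; this illustrates the idea, not Grapham's implementation:

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_iter, rng=np.random.default_rng()):
    """Random-walk Metropolis whose proposal covariance tracks the
    empirical covariance of the chain's history (Haario et al., 2001)."""
    x = np.asarray(x0, dtype=float)
    d = x.size
    scale = 2.38**2 / d                 # classical optimal-scaling factor
    eps = 1e-6 * np.eye(d)              # regularization keeps proposals proper
    cov = np.eye(d)
    lp = log_target(x)
    chain = np.empty((n_iter, d))
    for t in range(n_iter):
        prop = rng.multivariate_normal(x, scale * cov + eps)
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[t] = x
        if t >= 2 * d:                  # adapt after a short warm-up
            cov = np.atleast_2d(np.cov(chain[:t + 1].T))
    return chain
```

The second algorithm the abstract mentions instead nudges a scalar proposal scale up or down according to the observed acceptance rate; as the abstract notes, the two adaptations can be combined.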
Poisson Regression with Change-Point Prior in the Modelling of Disease Risk around a Point Source
2003
Bayesian estimation of the risk of a disease around a known point source of exposure is considered. The minimal requirements for data are that cases and populations at risk are known for a fixed set of concentric annuli around the point source, and each annulus has a uniquely defined distance from the source. The conventional Poisson likelihood is assumed for the counts of disease cases in each annular zone, with zone-specific relative risk parameters, and, conditional on the risks, the counts are considered independent. The prior for the relative risk parameters is assumed to be piecewise constant in distance, with a known number of components. This prior is the well-known cha…
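The zonewise Poisson likelihood with a piecewise-constant relative-risk function is short enough to sketch; the variable names and change-point parameterization below are illustrative, not the paper's:

```python
import numpy as np

def log_likelihood(cases, expected, distances, changepoints, risks):
    """Poisson log-likelihood for counts in concentric annuli. The
    relative risk is piecewise constant in distance: risks[j] applies
    between changepoints[j-1] and changepoints[j], so risks has one
    more entry than (sorted) changepoints."""
    seg = np.searchsorted(changepoints, distances)  # risk segment per zone
    mu = expected * np.asarray(risks)[seg]          # zone-specific Poisson mean
    # Up to an additive constant (the log-factorial term).
    return np.sum(cases * np.log(mu) - mu)
```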
Contributed discussion on article by Pratola
2016
The author should be commended for his outstanding contribution to the literature on Bayesian regression tree models. The author introduces three innovative sampling approaches which allow for efficient traversal of the model space. In this response, we add a fourth alternative.
Efficient anomaly detection on sampled data streams with contaminated phase I data
2020
Control chart algorithms aim to monitor a process over time. The monitoring consists of two phases. Phase I, also called the learning phase, estimates the normal process parameters; then, in Phase II, anomalies are detected. However, the learning phase itself can contain contaminated data such as outliers. If left undetected, they can jeopardize the accuracy of the whole chart by affecting the computed parameters, which leads to faulty classifications and defective data analysis results. This problem becomes more severe when the analysis is done on a sample of the data rather than the whole data. To avoid such a situation, Phase I quality must be guaranteed. The purpose…
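A generic way to guard Phase I against contamination, of the kind the abstract motivates, is to re-estimate the in-control parameters while iteratively trimming points outside the control limits; this is a sketch, not the paper's algorithm:

```python
import numpy as np

def phase_one_screen(x, L=3.0, max_iter=10):
    """Iteratively estimate in-control mean and standard deviation from
    Phase I data, trimming points outside the L-sigma control limits so
    outliers do not contaminate the limits later used in Phase II."""
    mask = np.ones(len(x), dtype=bool)
    for _ in range(max_iter):
        mu, sigma = x[mask].mean(), x[mask].std(ddof=1)
        new_mask = np.abs(x - mu) <= L * sigma
        if np.array_equal(new_mask, mask):
            break                        # limits have stabilized
        mask = new_mask
    return mu, sigma, mask               # chart limits: mu +/- L * sigma
```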
Adaptive independent sticky MCMC algorithms
2018
In this work, we introduce a novel class of adaptive Monte Carlo methods, called adaptive independent sticky MCMC algorithms, for efficient sampling from a generic target probability density function (pdf). The new class of algorithms employs adaptive non-parametric proposal densities which become closer and closer to the target as the number of iterations increases. The proposal pdf is built using interpolation procedures based on a set of support points which is constructed iteratively based on previously drawn samples. The algorithm's efficiency is ensured by a test that controls the evolution of the set of support points. This extra stage controls the computational cost and the converge…
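The mechanics are easiest to see in one dimension: an independence MH sampler whose proposal is a piecewise-constant approximation of the target on a growing grid of support points. In the paper the support set grows under an explicit test that bounds the computational cost; in the heavily simplified sketch below, every rejection adds a point, which is the crudest version of that mechanism:

```python
import numpy as np

def sticky_imh(log_target, support, n_iter, rng=np.random.default_rng()):
    """Independence Metropolis-Hastings whose proposal is a piecewise-
    constant approximation of the target built on a growing grid of
    support points; rejected proposals are added to the grid, so the
    proposal 'sticks' ever closer to the target."""
    pts = sorted(support)                # initial support points (>= 2)
    x, lp_x = float(pts[0]), log_target(pts[0])
    chain = []
    for _ in range(n_iter):
        edges = np.array(pts)
        # Cell heights from the target at left endpoints; a real
        # implementation would cache these instead of recomputing.
        heights = np.exp([log_target(p) for p in edges[:-1]])
        probs = heights * np.diff(edges)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)       # pick a cell ...
        y = rng.uniform(edges[k], edges[k + 1])   # ... then a point in it
        lp_y = log_target(y)
        # The proposal density is constant on each cell, so the MH ratio
        # only needs the two cell heights (normalizers cancel).
        qx = heights[np.searchsorted(edges, x, side='right') - 1]
        if np.log(rng.random()) < (lp_y - lp_x) + np.log(qx / heights[k]):
            x, lp_x = y, lp_y
        else:
            pts = sorted(pts + [y])      # sticky step: refine where we failed
        chain.append(x)
    return np.array(chain)
```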
Establishing some order amongst exact approximations of MCMCs
2016
Exact approximations of Markov chain Monte Carlo (MCMC) algorithms are a general emerging class of sampling algorithms. One of the main ideas behind exact approximations consists of replacing intractable quantities required to run standard MCMC algorithms, such as the target probability density in a Metropolis-Hastings algorithm, with estimators. Perhaps surprisingly, such approximations lead to powerful algorithms which are exact in the sense that they are guaranteed to have correct limiting distributions. In this paper we discover a general framework which allows one to compare, or order, performance measures of two implementations of such algorithms. In particular, we establish an order …
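The best-known instance of such an exact approximation is pseudo-marginal Metropolis-Hastings: a non-negative unbiased estimator replaces the intractable density in the acceptance ratio and is recycled until the next acceptance. A minimal sketch, where estimate_density and proposal_sample are user-supplied placeholders:

```python
import numpy as np

def pseudo_marginal_mh(estimate_density, proposal_sample, x0, n_iter,
                       rng=np.random.default_rng()):
    """Metropolis-Hastings in which the intractable target density is
    replaced by a non-negative unbiased estimator. The current state's
    estimate is recycled until a proposal is accepted, which is what
    makes the chain target the exact posterior."""
    x = np.asarray(x0, dtype=float)
    z = estimate_density(x, rng)          # noisy but unbiased, assumed > 0
    chain = np.empty((n_iter, x.size))
    for t in range(n_iter):
        y = proposal_sample(x, rng)       # assumed symmetric proposal
        z_y = estimate_density(y, rng)
        if rng.random() * z < z_y:        # estimates stand in for densities
            x, z = y, z_y
        chain[t] = x
    return chain
```

With a zero-variance estimator this reduces to standard Metropolis-Hastings; as the estimator gets noisier the chain typically mixes more slowly, which is the kind of performance ordering the paper establishes.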